With smart speakers in more homes, people are accustomed to asking questions such as "how many ounces are three tablespoons" or "is it going to rain today." To make public transit part of this ecosystem, it would be valuable to provide a speaker skill that answers questions like "when is the next 39 bus" with a spoken summary of the predicted arrival times: "The next inbound 39 buses arrive in 8 minutes and 20 minutes. The next outbound buses arrive in 6 minutes and 15 minutes."
Rating: 4.3 · Installs: 0 · Category: AI & LLM
This skill provides a clear concept for integrating transit predictions with smart speakers, with appropriate data sources identified (MBTA V3 API, Google Assistant, Amazon Alexa). However, the description lacks critical implementation details needed for a CLI agent to invoke it: no API endpoints, authentication methods, response parsing logic, or voice interaction flow are specified. The structure is clean but minimal. Novelty is moderate—while integrating multiple APIs requires effort, the core task (fetch predictions and format as speech) is relatively straightforward for modern tools. The skill would benefit from concrete examples of API calls, response handling, and speech synthesis formatting to be truly actionable.
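As a starting point for the missing implementation details, below is a minimal sketch of the prediction fetch and speech formatting. It assumes the MBTA V3 API's /predictions endpoint with filter[route], filter[stop], and sort query parameters, an API key sent in the x-api-key header, and a direction_id mapping of 0 = outbound and 1 = inbound; the stop ID and key are placeholders, and the Alexa/Google Assistant request handling is not shown.

```python
"""Minimal sketch: fetch MBTA V3 predictions for route 39 and format them as speech.

Assumptions not specified by the skill description: the /predictions endpoint,
x-api-key header authentication, and direction_id 0 = outbound / 1 = inbound.
STOP_ID and API_KEY are hypothetical placeholders.
"""
from datetime import datetime, timezone

import requests

MBTA_PREDICTIONS_URL = "https://api-v3.mbta.com/predictions"
API_KEY = "YOUR_MBTA_API_KEY"       # placeholder
STOP_ID = "PLACEHOLDER_STOP_ID"     # placeholder: the rider's bus stop


def minutes_until(iso_ts: str) -> int:
    """Whole minutes from now until an ISO-8601 arrival time."""
    arrival = datetime.fromisoformat(iso_ts)
    return max(0, int((arrival - datetime.now(timezone.utc)).total_seconds() // 60))


def phrase(minutes: list[int]) -> str:
    """Join a list of minute counts into a speakable phrase."""
    return " and ".join(f"{m} minutes" for m in minutes) if minutes else "no predicted times"


def next_arrivals(route: str = "39", limit: int = 2) -> str:
    """Return a sentence describing the next inbound and outbound arrivals for a route."""
    resp = requests.get(
        MBTA_PREDICTIONS_URL,
        params={"filter[route]": route, "filter[stop]": STOP_ID, "sort": "arrival_time"},
        headers={"x-api-key": API_KEY},
        timeout=10,
    )
    resp.raise_for_status()

    # Bucket predictions by direction_id (assumed: 0 = outbound, 1 = inbound).
    by_direction: dict[int, list[int]] = {0: [], 1: []}
    for prediction in resp.json()["data"]:
        attrs = prediction["attributes"]
        if attrs.get("arrival_time"):
            by_direction[attrs["direction_id"]].append(minutes_until(attrs["arrival_time"]))

    inbound = phrase(by_direction[1][:limit])
    outbound = phrase(by_direction[0][:limit])
    return (f"The next inbound {route} buses arrive in {inbound}. "
            f"The next outbound buses arrive in {outbound}.")


if __name__ == "__main__":
    print(next_arrivals())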